Can we count on counting? A look at the validity of community engagement survey measures
Ethan Kolek, Director of Evaluation, Amherst College [ekolek@amherst.edu]
Keywords: Survey validity, exploratory study, applied evaluation, survey complexity, measurement error
Conference Track: Contexts and methods: Theoretical and conceptual frameworks, research designs, and methodological issues
Format: Research/Scholarly paper
Summary
Recent studies have questioned the validity of many college student survey items (Hutchinson & Lovell, 2004; Pike, 2008; Porter, 2011). Researchers use various types of surveys to collect data about students’ community engagement experiences. Among the crucial pieces of information for both academic and applied research on community engagement is whether, and to what extent, a student participated in community engagement activities. This paper describes an exploratory study that investigated the validity of several survey items related to students’ community engagement participation, using data from applied evaluation work at a single institution.
Surveys are one of the most common ways to collect data about college students’ experiences (Hutchinson & Lovell, 2004; Pike, 2008; Porter, 2011). Unfortunately, the validity of many items used in surveys of college students has not been established (Porter, 2011). This exploratory study uses data collected at a single, highly selective, private liberal arts college in the Northeastern United States. Using matched data, I found that fewer than half of the students who had enrolled in a course with a community-based learning component reported having done so on the survey.
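To make the matched-data check concrete, the sketch below shows one way such an agreement rate could be computed; it is an illustration only, and the file names, column names, and response coding are assumptions for demonstration rather than the study’s actual data or procedure.

# Illustrative sketch only: file names, column names, and coding are
# hypothetical, not the study's actual records or analysis.
import pandas as pd

# Hypothetical registrar records of students enrolled in courses with a
# community-based learning (CBL) component, and hypothetical survey responses
# to a yes/no participation item, keyed by a shared student identifier.
enrollments = pd.read_csv("cbl_enrollments.csv")   # column: student_id
survey = pd.read_csv("survey_responses.csv")       # columns: student_id, reported_cbl (1 = yes, 0 = no)

# Restrict to survey respondents who, according to institutional records,
# actually took a CBL course.
verified = survey.merge(
    enrollments[["student_id"]].drop_duplicates(),
    on="student_id",
    how="inner",
)

# Share of verified participants whose self-report agrees with the record.
agreement_rate = verified["reported_cbl"].mean()
print(f"Self-report agreement among verified CBL enrollees: {agreement_rate:.1%}")

A parallel check in the other direction (respondents who reported participation but do not appear in the enrollment records) would help distinguish over-reporting from under-reporting.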
In this paper, I explore likely explanations for this survey item failure, such as the use of unclear terms. I raise the concern that “community-based learning” and “service-learning” are inappropriate terms to use in surveys, and I suggest ways in which higher education survey researchers can construct items that describe courses or behaviors in terms understandable to all respondents. This session is intended to a) increase participants’ awareness of the complexity of survey item construction, b) provide insight into why some survey questions collect valid data whereas others do not, and c) propose several ways to minimize measurement error in future community engagement survey measures.
References
Bowman, N. A. (2011a). Examining systematic errors in predictors of college student self-reported gains. New Directions for Institutional Research, 2011(150), 7–19.
Bowman, N. A. (2011b). Validity of self-reported gains at diverse institutions. Educational Researcher, 40(1), 22–24.
Bowman, N. A., & Brandenberger, J. W. (2010). Quantitative assessment of service-learning outcomes: Is self-reported change a reasonable proxy for longitudinal change? In J. Keshen, B. A. Holland, & B. E. Moely (Eds.), Research for what? Making engaged scholarship matter (pp. 25–43). Charlotte, NC: Information Age.
Dillman, D. A., Smyth, J. D., & Christian, L. M. (2009). Internet, mail, and mixed-mode surveys: The tailored design method (3rd ed.). Hoboken, NJ: Wiley.
Fowler, F. J. (1995). Improving survey questions: Design and evaluation. Thousand Oaks, CA: SAGE.
Groves, R. M., Fowler, F. J., Couper, M. P., Lepkowski, J. M., Singer, E., & Tourangeau, R. (2009). Survey methodology (2nd ed.). Hoboken, NJ: Wiley.
Hutchinson, S. R., & Lovell, C. D. (2004). A review of methodological characteristics of research published in key journals in higher education: Implications for graduate research training. Research in Higher Education, 45, 383–403.
Pike, G. R. (2008). Using weighting adjustments to compensate for survey nonresponse. Research in Higher Education, 49(2), 153–171.
Porter, S. R. (2011). Do college student surveys have any validity? The Review of Higher Education, 35(1), 45–76.
Porter, S. R., Rumann, C., & Pontius, J. (2011). The validity of student engagement survey questions: Can we accurately measure academic challenge? New Directions for Institutional Research, 2011(150), 87–98.
Sudman, S., & Bradburn, N. M. (1982). Asking questions: A practical guide to questionnaire design. San Francisco, CA: Jossey-Bass.
Tanur, J. M. (Ed.). (1994). Questions about questions: Inquiries into the cognitive bases of surveys. New York, NY: Russell Sage Foundation.